A big convergence of language, vision, and multimodal pretraining is emerging. In this work, we introduce BEiT-3, a general-purpose multimodal foundation model that achieves state-of-the-art transfer performance on both vision and vision-language tasks. Specifically, we advance the big convergence from three aspects: backbone architecture, pretraining task, and model scaling up. We introduce Multiway Transformers for general-purpose modeling, where the modular architecture enables both deep fusion and modality-specific encoding. Based on the shared backbone, we perform masked "language" modeling on images (Imglish), texts (English), and image-text pairs ("parallel sentences") in a unified manner. Experimental results show that BEiT-3 obtains state-of-the-art performance on object detection (COCO), semantic segmentation (ADE20K), image classification (ImageNet), visual reasoning (NLVR2), visual question answering (VQAv2), image captioning (COCO), and cross-modal retrieval (Flickr30K, COCO).
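The Multiway Transformer described above is straightforward to prototype: a single self-attention layer is shared across modalities, while each modality (or the image-text combination) routes through its own feed-forward expert. The following is a minimal sketch of that block; the hidden sizes, expert names, and routing interface are illustrative assumptions, not the released BEiT-3 configuration.

    import torch
    import torch.nn as nn

    class MultiwayBlock(nn.Module):
        """Sketch of a Multiway Transformer block: shared self-attention,
        modality-specific feed-forward experts (hyperparameters are illustrative)."""
        def __init__(self, dim=768, heads=12, experts=("vision", "language", "vl")):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)
            # One feed-forward expert per modality route.
            self.ffn = nn.ModuleDict({
                name: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
                for name in experts
            })

        def forward(self, x, route):
            # Shared attention over all tokens, then route tokens to a modality expert.
            h = self.norm1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.ffn[route](self.norm2(x))
            return x

    tokens = torch.randn(2, 197, 768)                   # e.g. image patch tokens
    out = MultiwayBlock()(tokens, route="vision")       # (2, 197, 768)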
Rapidly and noninvasively probing spatially varying decorrelation events, such as cerebral blood flow beneath the human skull, is an essential task in various scientific and clinical settings. One of the primary optical techniques used is diffuse correlation spectroscopy (DCS), whose classical implementation uses a single or a few single-photon detectors, resulting in poor spatial localization accuracy and relatively low temporal resolution. Here, we propose a technique termed Classifying Rapid decorrelation Events via Parallelized single photon dEtection (CREPE), a new form of DCS that can probe and classify different decorrelating movements hidden underneath a turbid volume with high sensitivity, using parallelized speckle detection from a $32\times32$ pixel SPAD array. We evaluate our setup by classifying different spatiotemporal-decorrelating patterns hidden beneath a 5 mm tissue-like phantom made of rapidly decorrelating dynamic scattering media. Twelve multi-mode fibers are used to collect scattered light from different positions on the surface of the tissue phantom. To validate our setup, we use a digital micromirror device (DMD) modulated at multi-kilohertz rates, as well as a vessel phantom containing flowing fluid. Together with a deep contrastive learning algorithm that outperforms classical unsupervised learning methods, we demonstrate that our approach can accurately detect and classify different transient decorrelation events (occurring within 0.1-0.4 s) underneath turbid scattering media, without any data labeling. This has the potential to be applied to noninvasively monitoring deep tissue motion patterns, for example identifying normal or abnormal cerebral blood flow events at multi-Hertz rates within a compact and static detection probe.
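The classical DCS observable that parallelized detection accelerates is the normalized intensity autocorrelation of each speckle's photon-count trace. The helper below computes that curve per SPAD pixel; the array shapes and the Poisson-simulated counts are only an illustrative stand-in for real measurements.

    import numpy as np

    def g2(counts, max_lag):
        """Normalized intensity autocorrelation g2(tau) of a photon-count
        time series from one SPAD pixel (the classical DCS observable)."""
        counts = np.asarray(counts, dtype=float)
        mean_sq = counts.mean() ** 2
        lags = np.arange(1, max_lag + 1)
        return np.array([(counts[:-lag] * counts[lag:]).mean() / mean_sq for lag in lags])

    # Hypothetical usage: one count trace per pixel of a 32x32 SPAD array.
    traces = np.random.poisson(lam=2.0, size=(1024, 5000))
    curves = np.stack([g2(px, max_lag=50) for px in traces])   # one decay curve per pixel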
We propose a cross-modal attention distillation framework to train a dual-encoder model for vision-language understanding tasks such as visual reasoning and visual question answering. Dual-encoder models have a faster inference speed than fusion-encoder models and enable the pre-computation of image and text representations during inference. However, the shallow interaction module used in dual-encoder models is insufficient to handle complex vision-language understanding tasks. To learn deep interactions between images and text, we introduce cross-modal attention distillation, which uses the image-to-text and text-to-image attention distributions of a fusion-encoder model to guide the training of our dual-encoder model. In addition, we show that applying cross-modal attention distillation in both the pre-training and fine-tuning stages achieves further improvements. Experimental results demonstrate that the distilled dual-encoder model achieves competitive performance on visual reasoning, visual entailment, and visual question answering, while enjoying a much faster inference speed than fusion-encoder models. Our code and models will be publicly available at https://github.com/kugwzk/Distilled-DualEncoder.
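A hedged sketch of what such a distillation objective can look like: cross-modal attention maps are built from the student dual-encoder's unimodal token representations and pulled toward the fusion-encoder teacher's image-to-text and text-to-image attention with a KL term. The function names, tensor shapes, and loss weighting are assumptions for illustration, not the authors' released code.

    import torch
    import torch.nn.functional as F

    def cross_modal_attention(q, k):
        """Attention distribution from query tokens of one modality to key
        tokens of the other; q, k have shape (batch, tokens, dim)."""
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        return scores.softmax(dim=-1)

    def attention_distill_loss(t_img, t_txt, s_img, s_txt):
        """KL divergence between the fusion-encoder teacher's image-to-text /
        text-to-image attention and the maps built from the dual-encoder student."""
        loss = 0.0
        for (tq, tk), (sq, sk) in [((t_img, t_txt), (s_img, s_txt)),
                                   ((t_txt, t_img), (s_txt, s_img))]:
            teacher = cross_modal_attention(tq, tk)
            student = cross_modal_attention(sq, sk).clamp_min(1e-8)
            loss = loss + F.kl_div(student.log(), teacher, reduction="batchmean")
        return loss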
Deep neural networks usually require accurate and abundant annotations to achieve outstanding performance in medical image segmentation. One-shot segmentation and weakly supervised learning are promising research directions that lower the labeling effort by learning a new class from only one annotated image and by exploiting coarse labels instead. Previous works usually fail to leverage anatomical structure and suffer from class imbalance and low-contrast problems. Hence, we present an innovative framework for 3D medical image segmentation with one-shot and weakly supervised settings. First, a propagation-reconstruction network is proposed to project scribbles from the annotated volume onto unlabeled 3D images, based on the assumption that anatomical patterns in different human bodies are similar. Then, a dual-level feature denoising module is designed to refine the scribbles based on anatomical- and pixel-level features. After expanding the scribbles into pseudo masks, we can train a segmentation model for the new class with a noisy-label training strategy. Experiments on one abdominal and one head-and-neck CT dataset show that the proposed method obtains significant improvement over state-of-the-art methods and performs robustly even under severe class imbalance and low contrast.
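Once scribbles have been propagated and denoised, supervision reduces to a partial loss over the scribbled voxels only. Below is a minimal sketch of that step, assuming the scribbles are stored as a label volume with an ignore index on unlabeled voxels; the shapes and index value are illustrative.

    import torch
    import torch.nn.functional as F

    def scribble_ce_loss(logits, scribbles, ignore_index=255):
        """Partial cross-entropy: only voxels covered by (pseudo-)scribbles are
        supervised; everything else is ignored. logits: (B, C, D, H, W),
        scribbles: (B, D, H, W) with ignore_index on unlabeled voxels."""
        return F.cross_entropy(logits, scribbles, ignore_index=ignore_index)

    # Hypothetical usage with a two-class toy volume.
    logits = torch.randn(1, 2, 16, 64, 64)
    scribbles = torch.full((1, 16, 64, 64), 255, dtype=torch.long)
    scribbles[0, 8, 30:34, 30:34] = 1          # a few foreground scribble voxels
    loss = scribble_ce_loss(logits, scribbles)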
Although deep neural networks (DNNs) have achieved prominent performance in various applications, it is well known that DNNs are vulnerable to adversarial examples/samples (AEs) crafted by adding imperceptible perturbations to clean/original samples. To overcome the weakness of existing defense methods, which damage the information of the original samples and thus reduce the accuracy of the target classifier, this work presents an enhanced defense method, IDFR (Input Denoising and Feature Restoring). The proposed IDFR consists of an enhanced input denoiser (ID) and a hidden lossy feature restorer (FR). Extensive experiments on benchmark datasets show that the proposed IDFR outperforms various state-of-the-art defense methods and is highly effective at protecting target models against various adversarial black-box or white-box attacks. \footnote{Source code is released at: \href{https://github.com/id-fr/idfr}{https://github.com/id-fr/idfr}}
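The overall defense can be pictured as a simple wrapper around the target classifier: denoise the input, extract features, restore them, then classify. The sketch below shows that data flow only; the toy modules stand in for the actual denoiser, backbone, and restorer, whose architectures are not specified here.

    import torch
    import torch.nn as nn

    class IDFRPipeline(nn.Module):
        """Sketch of the defense data flow: an input denoiser cleans the (possibly
        adversarial) image, and a feature restorer refines the classifier's hidden
        features before the final head. Module internals are placeholders."""
        def __init__(self, denoiser, backbone, restorer, head):
            super().__init__()
            self.denoiser, self.backbone = denoiser, backbone
            self.restorer, self.head = restorer, head

        def forward(self, x):
            x = self.denoiser(x)          # ID: remove adversarial perturbations
            feat = self.backbone(x)       # hidden features, possibly still lossy
            feat = self.restorer(feat)    # FR: restore damaged feature information
            return self.head(feat)

    # Hypothetical instantiation with toy modules.
    pipeline = IDFRPipeline(
        denoiser=nn.Identity(),
        backbone=nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU()),
        restorer=nn.Linear(256, 256),
        head=nn.Linear(256, 10),
    )
    logits = pipeline(torch.randn(4, 3, 32, 32))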
Noninvasive optical imaging through dynamic scattering media has numerous important biomedical applications but remains a challenging task. While standard diffuse imaging methods measure optical absorption or fluorescence emission, it is also well established that the temporal correlation of scattered coherent light diffuses through tissue much like optical intensity does. Few works to date, however, have aimed to experimentally measure and process such temporal correlation data to demonstrate deep-tissue video reconstruction of decorrelation dynamics. In this work, we use a single-photon avalanche diode (SPAD) array camera to simultaneously monitor, at the single-photon level, the temporal dynamics of speckle fluctuations delivered from 12 different positions on a tissue phantom surface through a customized fiber bundle array. We then apply a deep neural network to convert the acquired single-photon measurements into videos of the scattering dynamics beneath rapidly decorrelating tissue phantoms. We demonstrate the ability to reconstruct images of transient (0.1-0.4 s) dynamic events occurring beneath a decorrelating tissue phantom at millimeter-scale resolution, and highlight how our model can flexibly extend to monitoring flow speed within buried phantom vessels.
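Conceptually, the reconstruction network maps per-fiber single-photon statistics to image frames. The toy decoder below illustrates that mapping with fully connected layers; the fiber count matches the twelve collection fibers mentioned above, but all other dimensions and the architecture itself are illustrative assumptions rather than the authors' model.

    import torch
    import torch.nn as nn

    class SpeckleToVideo(nn.Module):
        """Toy decoder mapping per-fiber photon-count statistics to a small
        image frame; dimensions are illustrative, not the paper's architecture."""
        def __init__(self, n_fibers=12, n_features=64, frame=32):
            super().__init__()
            self.frame = frame
            self.net = nn.Sequential(
                nn.Linear(n_fibers * n_features, 512), nn.ReLU(),
                nn.Linear(512, frame * frame),
            )

        def forward(self, x):                       # x: (batch, n_fibers, n_features)
            out = self.net(x.flatten(1))
            return out.view(-1, 1, self.frame, self.frame)

    frames = SpeckleToVideo()(torch.randn(8, 12, 64))   # one reconstructed frame per sample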
Despite some successful applications of goal-driven navigation, existing deep reinforcement learning-based approaches notoriously suffer from poor data efficiency. One of the reasons is that the goal information is decoupled from the perception module and directly introduced as a condition of decision-making, so the goal-irrelevant features of the scene representation play an adversarial role during the learning process. In light of this, we present a novel Goal-guided Transformer-enabled reinforcement learning (GTRL) approach that feeds the physical goal states into the scene encoder, guiding the scene representation to couple with the goal information and enabling efficient autonomous navigation. More specifically, we propose a novel variant of the Vision Transformer as the backbone of the perception system, namely the Goal-guided Transformer (GoT), and pre-train it with expert priors to boost data efficiency. Subsequently, a reinforcement learning algorithm is instantiated for the decision-making system, taking the goal-oriented scene representation from the GoT as input and generating decision commands. As a result, our approach encourages the scene representation to concentrate mainly on goal-relevant features, which substantially enhances the data efficiency of DRL training and leads to superior navigation performance. Both simulation and real-world experimental results demonstrate the superiority of our approach in terms of data efficiency, performance, robustness, and sim-to-real generalization, compared with other state-of-the-art baselines. Demonstration videos are available at https://youtu.be/93LGlGvaN0c.
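The core of GoT is conditioning the scene encoder on the goal itself: the physical goal state is embedded as an extra token and attends jointly with the image patch tokens. A minimal sketch of that idea follows; the token dimensions, depth, and goal encoding are illustrative assumptions, not the paper's exact backbone.

    import torch
    import torch.nn as nn

    class GoalGuidedEncoder(nn.Module):
        """Sketch of the goal-guided idea: embed the goal state as an extra token
        so the scene representation is conditioned on the goal. Sizes are illustrative."""
        def __init__(self, dim=256, goal_dim=3, n_patches=64, depth=4, heads=8):
            super().__init__()
            self.patch_embed = nn.Linear(768, dim)          # flattened patch -> token
            self.goal_embed = nn.Linear(goal_dim, dim)      # goal state -> token
            self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
            layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, depth)

        def forward(self, patches, goal):
            tokens = torch.cat([self.goal_embed(goal).unsqueeze(1),
                                self.patch_embed(patches)], dim=1) + self.pos
            return self.encoder(tokens)[:, 0]               # goal-conditioned scene feature

    feat = GoalGuidedEncoder()(torch.randn(2, 64, 768), torch.randn(2, 3))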
Despite the remarkable success achieved by graph convolutional networks for functional brain activity analysis, the heterogeneity of functional patterns and the scarcity of imaging data still pose challenges in many tasks. Transferring knowledge from a source domain with abundant training data to a target domain is effective for improving representation learning on scarce training data. However, traditional transfer learning methods often fail to generalize the pre-trained knowledge to the target task due to domain discrepancy. Self-supervised learning on graphs can increase the generalizability of graph features since self-supervision concentrates on inherent graph properties that are not limited to a particular supervised task. We propose a novel knowledge transfer strategy that integrates meta-learning with self-supervised learning to deal with the heterogeneity and scarcity of fMRI data. Specifically, we perform a self-supervised task on the source domain and apply meta-learning, which strongly improves the generalizability of the model through bi-level optimization, to transfer the self-supervised knowledge to the target domain. Through experiments on a neurological disorder classification task, we demonstrate that the proposed strategy significantly improves target task performance by increasing the generalizability and transferability of graph-based knowledge.
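To make the knowledge-transfer recipe concrete, the sketch below adapts a model on a source task's self-supervised loss in an inner loop and then updates the meta-parameters in an outer loop. It uses a first-order Reptile-style update as a stand-in for the bi-level optimization described above; ssl_loss_fn and the task batches are assumed placeholders, not the authors' implementation.

    import copy
    import torch

    def meta_step(model, ssl_loss_fn, source_tasks, inner_lr=1e-2, meta_lr=1e-3, inner_steps=3):
        """First-order sketch of the bi-level idea: adapt a copy of the model on
        each source task's self-supervised loss, then move the meta-parameters
        toward the adapted weights."""
        for task_batch in source_tasks:
            adapted = copy.deepcopy(model)
            opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
            for _ in range(inner_steps):                 # inner loop: self-supervised objective
                opt.zero_grad()
                ssl_loss_fn(adapted, task_batch).backward()
                opt.step()
            with torch.no_grad():                        # outer loop: meta-parameter update
                for p, q in zip(model.parameters(), adapted.parameters()):
                    p.add_(meta_lr * (q - p))
        return model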
Airport runway segmentation can effectively reduce the accident rate during the landing phase, which carries the largest risk of flight accidents. With the rapid development of deep learning, related methods perform well on segmentation tasks and can adapt to complex scenes. However, the lack of large-scale, publicly available datasets in this field makes it difficult to develop deep learning-based methods. Therefore, we propose a Benchmark for Airport Runway Segmentation, named BARS. Meanwhile, a semi-automatic annotation pipeline is designed to reduce the annotation workload. BARS has the largest dataset with the richest categories and the only instance annotations in the field. The dataset, collected using the X-Plane simulation platform, contains 10,002 images and 29,347 instances across three categories. We evaluate eight representative instance segmentation methods on BARS and analyze their performance. Exploiting the regular shape of airport runways, we propose a plug-and-play smoothing post-processing module (SPPM) and a contour point constraint loss (CPCL) function to smooth segmentation results for mask-based and contour-based methods, respectively. Furthermore, a novel evaluation metric named average smoothness (AS) is developed to measure smoothness. The experiments show that existing instance segmentation methods achieve good prediction performance on BARS. SPPM and CPCL improve the average accuracy by 0.9% and 1.13%, respectively, and improve the average smoothness by more than 50% and 28%, respectively.
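As an illustration of the smoothing idea behind SPPM (not the module itself), a predicted runway mask can be post-processed by approximating each contour with a simpler polygon, which exploits the runway's regular shape. The epsilon ratio below is an assumed hyperparameter.

    import cv2
    import numpy as np

    def smooth_mask(mask, epsilon_ratio=0.01):
        """Illustrative smoothing post-process: approximate each predicted mask
        contour by a simpler polygon. mask: binary uint8 array of shape (H, W)."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        smoothed = np.zeros_like(mask)
        for contour in contours:
            eps = epsilon_ratio * cv2.arcLength(contour, closed=True)
            poly = cv2.approxPolyDP(contour, eps, closed=True)
            cv2.fillPoly(smoothed, [poly], 255)
        return smoothed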
This paper studies 3D dense shape correspondence, a key shape analysis application in computer vision and graphics. We introduce a novel hybrid geometric deep learning-based model that learns geometrically meaningful and discretization-independent features, with a U-Net model as the primary node feature extraction module followed by a successive spectral-based graph convolutional network. To create a diverse set of filters, we use anisotropic wavelet basis filters, which are sensitive to both different directions and band-passes. This filter set overcomes the over-smoothing behavior of conventional graph neural networks. To further improve the model's performance, we add a function that perturbs the feature maps in the last layer ahead of the fully connected layers, forcing the network to learn more discriminative features overall. The resulting correspondence maps show state-of-the-art performance on the benchmark datasets in terms of average geodesic error and superior robustness to discretization in 3D meshes. Our approach provides new insights and practical solutions for dense shape correspondence research.
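The last-layer perturbation can be sketched as a small module that injects noise into the per-vertex feature maps during training only, so the network must keep its descriptors discriminative under perturbation. The noise type and scale below are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn as nn

    class FeaturePerturbation(nn.Module):
        """Illustrative version of the last-layer perturbation: add Gaussian noise
        to the per-vertex feature maps during training only."""
        def __init__(self, sigma=0.1):
            super().__init__()
            self.sigma = sigma

        def forward(self, feats):                 # feats: (num_vertices, channels)
            if self.training:
                return feats + self.sigma * torch.randn_like(feats)
            return feats

    perturb = FeaturePerturbation()
    perturb.train()
    out = perturb(torch.randn(5000, 128))         # perturbed vertex descriptors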